24 research outputs found

    Compiler Support for Sparse Tensor Computations in MLIR

    Sparse tensors arise in problems in science, engineering, machine learning, and data analytics. Programs that operate on such tensors can exploit sparsity to reduce storage requirements and computational time. Developing and maintaining sparse software by hand, however, is a complex and error-prone task. Therefore, we propose treating sparsity as a property of tensors, not a tedious implementation task, and letting a sparse compiler generate sparse code automatically from a sparsity-agnostic definition of the computation. This paper discusses integrating this idea into MLIR.
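The core idea above — write the computation as if the tensor were dense, and let a compiler emit storage-aware code — can be illustrated with a small Java sketch (MLIR itself works at the IR level; this example, with illustrative names, only contrasts the sparsity-agnostic definition with the kind of CSR-based code a sparse compiler could generate from it).

```java
// The same matrix-vector product y = A*x written twice: once as the
// sparsity-agnostic definition, once in the compiler-generated style.
public class SparseSketch {
    // Sparsity-agnostic definition: iterates over all n*n entries,
    // regardless of how many are zero.
    static double[] matVecDense(double[][] a, double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                y[i] += a[i][j] * x[j];
        return y;
    }

    // Generated style: only nonzeros are stored, here in CSR form
    // (row pointers, column indices, values); work is proportional
    // to the number of nonzeros, not to n*n.
    static double[] matVecCSR(int[] rowPtr, int[] colIdx,
                              double[] val, double[] x) {
        int n = rowPtr.length - 1;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++)
                y[i] += val[k] * x[colIdx[k]];
        return y;
    }
}
```

The programmer writes and maintains only the first form; the second form, and the choice of storage format behind it, is the compiler's concern.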

    javab -- a prototype . . .

    No full text

    javar: A prototype Java restructuring compiler

    No full text
    In this paper we show how implicit parallelism in Java programs can be made explicit by a restructuring compiler using the multi-threading mechanism of the language. In particular, we focus on automatically exploiting implicit parallelism in loops and multi-way recursive methods. Expressing parallelism in Java itself clearly has the advantage that the transformed program remains portable. After compilation of the transformed Java program into bytecode, speedup can be obtained on any platform on which the implementation of the JVM (Java Virtual Machine) supports the true parallel execution of threads. Moreover, we will see that the transformations presented in this paper induce only a slight overhead on uni-processors.
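The kind of loop transformation described above can be sketched as follows (a minimal illustration, not javar's actual output; the paper predates lambdas, so modern Java syntax is used here for brevity, and all names are illustrative):

```java
// An implicitly parallel loop rewritten to run on p Java threads.
public class LoopParallel {
    // Original loop: iterations are independent, so it is implicitly
    // parallel: c[i] = a[i] + b[i].
    static void addSerial(double[] a, double[] b, double[] c) {
        for (int i = 0; i < c.length; i++) c[i] = a[i] + b[i];
    }

    // Transformed version: block-partition the iteration space over
    // p threads, then join them all (a barrier at the end of the loop).
    static void addParallel(double[] a, double[] b, double[] c, int p)
            throws InterruptedException {
        int n = c.length;
        Thread[] workers = new Thread[p];
        for (int t = 0; t < p; t++) {
            final int lo = t * n / p, hi = (t + 1) * n / p;
            workers[t] = new Thread(() -> {
                for (int i = lo; i < hi; i++) c[i] = a[i] + b[i];
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
    }
}
```

Because the transformed program is still plain Java, it remains portable: any JVM runs it, and one that maps threads to multiple processors can speed it up.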

    Compilation Techniques for Sparse Matrix Computations

    No full text
    The problem of compiler optimization of sparse codes is well known, and no satisfactory solutions have been found yet. One of the major obstacles is that sparse programs deal explicitly with the particular data structures selected for storing sparse matrices. This explicit data structure handling obscures the functionality of a code to such a degree that optimization of the code is prevented, e.g. by the introduction of indirect addressing. The method presented in this paper postpones data structure selection until the compile phase, thereby allowing the compiler to combine code optimization with explicit data structure selection. Not only does this method enable the compiler to generate efficient code for sparse computations, it also greatly reduces the complexity of the programmer's task. Index Terms: Compilation Techniques, Optimization, Program Transformations, Restructuring Compilers, Sparse Computations, Sparse Matrices. 1 Introduction A significant part of ..
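The indirect-addressing obstacle the abstract mentions can be seen in a short sketch (illustrative names; the paper itself targets Fortran-style sparse codes, but the analysis problem is language-independent):

```java
// Why hand-written sparse code defeats compiler analysis.
public class Indirection {
    // Dense form: the access pattern is affine (y[i], x[i]), so the
    // compiler can prove iterations are independent and optimize
    // freely (vectorize, parallelize, reorder).
    static void scaleDense(double[] x, double[] y, double s) {
        for (int i = 0; i < x.length; i++) y[i] = s * x[i];
    }

    // Hand-written sparse form: y[idx[k]] is an indirect store. The
    // compiler cannot prove, at compile time, that distinct iterations
    // write distinct elements, so most loop optimizations are disabled.
    static void scaleSparse(double[] val, int[] idx, double[] y, double s) {
        for (int k = 0; k < val.length; k++) y[idx[k]] = s * val[k];
    }
}
```

Postponing data structure selection to compile time lets the compiler analyze the dense form, where the access pattern is still visible, and introduce the indirection itself only after optimization.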

    A Strategy for Exploiting Implicit Loop Parallelism in Java Programs

    No full text
    In this paper, we explore a strategy that can be used by a source-to-source restructuring compiler to exploit implicit loop parallelism in Java programs. First, the compiler must identify the parallel loops in a program. Thereafter, the compiler explicitly expresses this parallelism in the transformed program using the multithreading mechanism of Java. Finally, after a single compilation of the transformed program into Java byte-code, speedup can be obtained on any platform on which the Java byte-code interpreter supports actual concurrent execution of threads, whereas threads induce only a slight overhead for serial execution. In addition, this approach can enable a compiler to explicitly express the scheduling policy of each parallel loop in the program. 1 Introduction One of the important design goals of the Java programming language was to provide a truly architecture-neutral language, so that Java applications could run on any platform. To achieve this objective, a Java program ..
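The abstract's final point — expressing the scheduling policy of each parallel loop — commonly comes down to how iterations are assigned to threads. A sketch of two standard policies (illustrative names, not the paper's actual code):

```java
import java.util.function.IntConsumer;

// Two scheduling policies for a parallel loop of n iterations on p threads.
public class Scheduling {
    // Block scheduling: thread t executes one contiguous chunk of
    // iterations; good when iterations cost roughly the same.
    static void runBlock(int n, int p, IntConsumer body)
            throws InterruptedException {
        Thread[] ws = new Thread[p];
        for (int t = 0; t < p; t++) {
            final int lo = t * n / p, hi = (t + 1) * n / p;
            ws[t] = new Thread(() -> {
                for (int i = lo; i < hi; i++) body.accept(i);
            });
            ws[t].start();
        }
        for (Thread w : ws) w.join();
    }

    // Cyclic scheduling: thread t executes iterations t, t+p, t+2p, ...
    // which balances load when iteration cost varies with i.
    static void runCyclic(int n, int p, IntConsumer body)
            throws InterruptedException {
        Thread[] ws = new Thread[p];
        for (int t = 0; t < p; t++) {
            final int start = t;
            ws[t] = new Thread(() -> {
                for (int i = start; i < n; i += p) body.accept(i);
            });
            ws[t].start();
        }
        for (Thread w : ws) w.join();
    }
}
```

A compiler that emits the transformed loop in Java itself can select either policy per loop simply by emitting the corresponding iteration-assignment code.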